Results 1 - 20 of 33
1.
Ear Hear ; 39(1): 101-109, 2018.
Article in English | MEDLINE | ID: mdl-28700448

ABSTRACT

OBJECTIVES: The increasing number of older adults now receiving cochlear implants raises the question of how the novel signal produced by cochlear implants may interact with cognitive aging in the recognition of words heard spoken within a linguistic context. The objective of this study was to pit the facilitative effects of a constraining linguistic context against a potential age-sensitive negative effect of response competition on the effectiveness of word recognition. DESIGN: Younger (n = 8; mean age = 22.5 years) and older (n = 8; mean age = 67.5 years) adult implant recipients heard 20 target words as the final words in sentences that manipulated the target word's probability of occurrence within the sentence context. Data from published norms were also used to measure response entropy, calculated from the total number of different responses and the probability distribution of the responses suggested by the sentence context. Sentence-final words were presented to participants using a word-onset gating paradigm, in which a target word was presented with increasing amounts of its onset duration in 50 msec increments until the word was correctly identified. RESULTS: Results showed that for both younger and older adult implant users, the amount of word-onset information needed for correct recognition of sentence-final words was inversely proportional to their likelihood of occurrence within the sentence context, with older adults gaining differential advantage from the contextual constraints offered by a sentence context. On the negative side, older adults' word recognition was differentially hampered by high response entropy, with this effect being driven primarily by the number of competing responses that might also fit the sentence context. CONCLUSIONS: Consistent with previous research with normal-hearing younger and older adults, the present results showed older adult implant users' recognition of spoken words to be highly sensitive to linguistic context.
This sensitivity, however, also resulted in a greater degree of interference from other words that might also be activated by the context, with negative effects on ease of word recognition. These results are consistent with an age-related inhibition deficit extending to the domain of semantic constraints on word recognition.
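The response-entropy measure described in this abstract can be sketched as Shannon entropy over the distribution of sentence completions in published norms. A minimal illustration (the completion words and counts below are invented, not from the study):

```python
import math

def response_entropy(counts):
    """Shannon entropy (bits) of the response distribution for a sentence
    context. `counts` maps each distinct completion word to how many
    respondents produced it in the norming data."""
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log2(p) for p in probs)

# A strongly constraining context: one dominant completion, low entropy.
low = response_entropy({"dishes": 18, "cups": 2})
# A weakly constraining context: four equally likely completions, high entropy.
high = response_entropy({"dishes": 5, "cups": 5, "plates": 5, "glasses": 5})
```

Entropy rises both with the number of different responses and with how evenly probability is spread across them, which matches the abstract's two components of the measure.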


Subjects
Cochlear Implants, Speech Perception, Acoustic Stimulation, Adolescent, Adult, Age Factors, Aged, Auditory Threshold, Deafness/physiopathology, Deafness/rehabilitation, Female, Humans, Male, Middle Aged, Semantics, Young Adult
2.
J Acoust Soc Am ; 141(1): 373, 2017 01.
Article in English | MEDLINE | ID: mdl-28147573

ABSTRACT

English listeners use suprasegmental cues to lexical stress during spoken-word recognition. Prosodic cues are, however, less salient in spectrally degraded speech, as provided by cochlear implants. The present study examined how spectral degradation with and without low-frequency fine-structure information affects normal-hearing listeners' ability to benefit from suprasegmental cues to lexical stress in online spoken-word recognition. To simulate electric hearing, an eight-channel vocoder spectrally degraded the stimuli while preserving temporal envelope information. Additional lowpass-filtered speech was presented to the opposite ear to simulate bimodal hearing. In a visual world paradigm, listeners heard a word while their eye fixations to four printed words (target, competitor, and two distractors) were tracked. The target and competitor overlapped segmentally in their first two syllables but mismatched suprasegmentally in their first syllables, as the initial syllable received primary stress in one word and secondary stress in the other (e.g., "ˈadmiral," "ˌadmiˈration"). In the vocoder-only condition, listeners were unable to use lexical stress to recognize targets before segmental information disambiguated them from competitors. With additional lowpass-filtered speech, however, listeners efficiently processed prosodic information to speed up online word recognition. Low-frequency fine-structure cues in simulated bimodal hearing allowed listeners to benefit from suprasegmental cues to lexical stress during word recognition.


Subjects
Cues, Recognition (Psychology), Speech Acoustics, Speech Intelligibility, Speech Perception, Voice Quality, Acoustic Stimulation, Female, Humans, Male, Photic Stimulation, Time Factors, Visual Perception, Young Adult
3.
J Speech Lang Hear Res ; 60(1): 190-198, 2017 01 01.
Article in English | MEDLINE | ID: mdl-28056135

ABSTRACT

Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g., "Click on the word admiral"). Displays contained a critical pair of words (e.g., ˈadmiral-ˌadmiˈration) that were segmentally identical for their first 2 syllables but differed suprasegmentally in their 1st syllable: One word began with primary lexical stress, and the other began with secondary lexical stress. All words had phrase-level prominence. Listeners' relative proportion of eye fixations on these words indicated their ability to differentiate them over time. Results: Before critical word pairs became segmentally distinguishable in their 3rd syllables, participants fixated target words more than their stress competitors, but only if targets had initial primary lexical stress. The degree to which stress competitors were fixated was independent of their stress pattern. Conclusions: Suprasegmental information about lexical stress modulates the time course of spoken-word recognition. Specifically, suprasegmental information on the primary-stressed syllable of words with phrase-level prominence helps in distinguishing the word from phonological competitors with secondary lexical stress.


Subjects
Phonetics, Speech Perception, Eye Movement Measurements, Ocular Fixation, Humans, Physiological Pattern Recognition, Reading, Recognition (Psychology), Young Adult
4.
J Acoust Soc Am ; 140(5): 3971, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27908030

ABSTRACT

In simulations of electrical-acoustic stimulation (EAS), vocoded speech intelligibility is aided by preservation of low-frequency acoustic cues. However, the speech signal is often interrupted in everyday listening conditions, and effects of interruption on hybrid speech intelligibility are poorly understood. Additionally, listeners rely on information-bearing acoustic changes to understand full-spectrum speech (as measured by cochlea-scaled entropy [CSE]) and vocoded speech (CSECI), but how listeners utilize these informational changes to understand EAS speech is unclear. Here, normal-hearing participants heard noise-vocoded sentences with three to six spectral channels in two conditions: vocoder-only (80-8000 Hz) and simulated hybrid EAS (vocoded above 500 Hz; original acoustic signal below 500 Hz). In each sentence, four 80-ms intervals containing high-CSECI or low-CSECI acoustic changes were replaced with speech-shaped noise. As expected, performance improved with the preservation of low-frequency fine-structure cues (EAS). This improvement decreased for continuous EAS sentences as more spectral channels were added, but increased as more channels were added to noise-interrupted EAS sentences. Performance was impaired more when high-CSECI intervals were replaced by noise than when low-CSECI intervals were replaced, but this pattern did not differ across listening modes. Utilizing information-bearing acoustic changes to understand speech is predicted to generalize to cochlear implant users who receive EAS inputs.
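The hybrid EAS simulation described here (original acoustic signal below 500 Hz, noise-vocoded speech above it) can be sketched as follows. This is an illustrative simplification under stated assumptions — rectification-only envelopes, fourth-order Butterworth bands, log-spaced channel edges up to 8 kHz — not the study's exact processing chain:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def simulate_eas(signal, fs, crossover=500.0, n_channels=4, rng=None):
    """Crude simulation of hybrid electric-acoustic stimulation (EAS):
    keep the original acoustic signal below `crossover` Hz, and replace
    the band above it with an n-channel noise vocoder."""
    rng = np.random.default_rng(rng)
    # Preserved low-frequency acoustic portion.
    lp = sosfilt(butter(4, crossover, "low", fs=fs, output="sos"), signal)
    # Log-spaced analysis bands from the crossover up to 8 kHz.
    edges = np.geomspace(crossover, 8000.0, n_channels + 1)
    vocoded = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], "bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        env = np.abs(band)  # crude temporal envelope (a real vocoder smooths this)
        carrier = rng.standard_normal(len(signal))
        vocoded += sosfilt(sos, env * carrier)  # band-limited envelope-modulated noise
    return lp + vocoded

# Demo: 1 s of a 440-Hz tone at 32 kHz, hybrid-processed with 4 channels.
fs = 32000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
eas = simulate_eas(tone, fs, rng=0)
```

Raising `n_channels` increases the spectral resolution of the vocoded portion, the variable manipulated (three to six channels) in the study.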


Subjects
Noise, Acoustic Stimulation, Cochlear Implants, Perceptual Masking, Speech Intelligibility, Speech Perception
5.
Trends Hear ; 20, 2016 06 17.
Article in English | MEDLINE | ID: mdl-27317666

ABSTRACT

Multiple redundant acoustic cues can contribute to the perception of a single phonemic contrast. This study investigated the effect of spectral degradation on the discriminability and perceptual saliency of acoustic cues for identification of word-final fricative voicing in "loss" versus "laws", and possible changes that occurred when low-frequency acoustic cues were restored. Three acoustic cues that contribute to the word-final /s/-/z/ contrast (first formant frequency [F1] offset, vowel-consonant duration ratio, and consonant voicing duration) were systematically varied in synthesized words. A discrimination task measured listeners' ability to discriminate differences among stimuli within a single cue dimension. A categorization task examined the extent to which listeners make use of a given cue to label a syllable as "loss" versus "laws" when multiple cues are available. Normal-hearing listeners were presented with stimuli that were either unprocessed, processed with an eight-channel noise-band vocoder to approximate spectral degradation in cochlear implants, or low-pass filtered. Listeners were tested in four listening conditions: unprocessed, vocoder, low-pass, and a combined vocoder + low-pass condition that simulated bimodal hearing. Results showed a negative impact of spectral degradation on F1 cue discrimination and a trading relation between spectral and temporal cues in which listeners relied more heavily on the temporal cues for "loss-laws" identification when spectral cues were degraded. Furthermore, the addition of low-frequency fine-structure cues in simulated bimodal hearing increased the perceptual saliency of the F1 cue for "loss-laws" identification compared with vocoded speech. Findings suggest an interplay between the quality of sensory input and cue importance.


Subjects
Acoustic Stimulation, Cochlear Implants, Speech Perception, Cochlear Implantation, Cues, Humans, Phonetics
6.
J Acoust Soc Am ; 139(4): 1747, 2016 04.
Article in English | MEDLINE | ID: mdl-27106322

ABSTRACT

Low-frequency acoustic cues have been shown to enhance speech perception by cochlear-implant users, particularly when target speech occurs in a competing background. The present study examined the extent to which a continuous representation of low-frequency harmonicity cues contributes to bimodal benefit in simulated bimodal listeners. Experiment 1 examined the benefit of restoring a continuous temporal envelope to the low-frequency ear while the vocoder ear received a temporally interrupted stimulus. Experiment 2 examined the effect of providing continuous harmonicity cues in the low-frequency ear as compared to restoring a continuous temporal envelope in the vocoder ear. Findings indicate that bimodal benefit for temporally interrupted speech increases when continuity is restored to either or both ears. The primary benefit appears to stem from the continuous temporal envelope in the low-frequency region providing additional phonetic cues related to manner and F1 frequency; a secondary contribution is provided by low-frequency harmonicity cues when a continuous representation of the temporal envelope is present in the low-frequency, or both ears. The continuous temporal envelope and harmonicity cues of low-frequency speech are thought to support bimodal benefit by facilitating identification of word and syllable boundaries, and by restoring partial phonetic cues that occur during gaps in the temporally interrupted stimulus.
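The temporally interrupted stimuli used in this line of work can be produced by square-wave gating the waveform. A minimal sketch, using the 5-Hz rate and 50% duty cycle reported in the related study below (abstract 7); the gating parameters of this particular experiment are not given in the abstract:

```python
import numpy as np

def square_wave_gate(signal, fs, rate_hz=5.0, duty=0.5):
    """Temporally interrupt speech by square-wave gating: within each
    gating period, the first `duty` fraction is on, the rest is silenced."""
    t = np.arange(len(signal)) / fs
    gate = ((t * rate_hz) % 1.0) < duty  # True during the "on" phase
    return signal * gate

# Demo: gate 1 s of a constant "signal" at 1 kHz sampling rate.
fs = 1000
gated = square_wave_gate(np.ones(fs), fs)
```

At 5 Hz with a 50% duty cycle, each 200-ms period keeps its first 100 ms and silences the rest, so exactly half the samples survive.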


Subjects
Cochlear Implantation, Cues, Periodicity, Persons With Hearing Impairments/rehabilitation, Speech Acoustics, Speech Perception, Acoustic Stimulation, Acoustics, Adolescent, Adult, Speech Audiometry, Cochlear Implantation/instrumentation, Cochlear Implants, Electric Stimulation, Humans, Noise/adverse effects, Perceptual Masking, Persons With Hearing Impairments/psychology, Phonetics, Sound Spectrography, Speech Intelligibility, Time Factors, Young Adult
7.
Ear Hear ; 37(5): 582-92, 2016.
Article in English | MEDLINE | ID: mdl-27007220

ABSTRACT

OBJECTIVES: Previous studies have documented the benefits of bimodal hearing as compared with a cochlear implant alone, but most have focused on the importance of bottom-up, low-frequency cues. The purpose of the present study was to evaluate the role of top-down processing in bimodal hearing by measuring the effect of sentence context on bimodal benefit for temporally interrupted sentences. It was hypothesized that low-frequency acoustic cues would facilitate the use of contextual information in the interrupted sentences, resulting in greater bimodal benefit for the higher context (CUNY) sentences than for the lower context (IEEE) sentences. DESIGN: Young normal-hearing listeners were tested in simulated bimodal listening conditions in which noise band vocoded sentences were presented to one ear with or without low-pass (LP) filtered speech or LP harmonic complexes (LPHCs) presented to the contralateral ear. Speech recognition scores were measured in three listening conditions: vocoder-alone, vocoder combined with LP speech, and vocoder combined with LPHCs. Temporally interrupted versions of the CUNY and IEEE sentences were used to assess listeners' ability to fill in missing segments of speech by using top-down linguistic processing. Sentences were square-wave gated at a rate of 5 Hz with a 50% duty cycle. Three vocoder channel conditions were tested for each type of sentence (8, 12, and 16 channels for CUNY; 12, 16, and 32 channels for IEEE) and bimodal benefit was compared for similar amounts of spectral degradation (matched-channel comparisons) and similar ranges of baseline performance. Two gain measures, percentage-point gain and normalized gain, were examined. RESULTS: Significant effects of context on bimodal benefit were observed when LP speech was presented to the residual-hearing ear. 
For the matched-channel comparisons, CUNY sentences showed significantly higher normalized gains than IEEE sentences for both the 12-channel (20 points higher) and 16-channel (18 points higher) conditions. For the individual gain comparisons that used a similar range of baseline performance, CUNY sentences showed bimodal benefits that were significantly higher (7 percentage points, or 15 points normalized gain) than those for IEEE sentences. The bimodal benefits observed here for temporally interrupted speech were considerably smaller than those observed in an earlier study that used continuous speech. Furthermore, unlike previous findings for continuous speech, no bimodal benefit was observed when LPHCs were presented to the LP ear. CONCLUSIONS: Findings indicate that linguistic context has a significant influence on bimodal benefit for temporally interrupted speech and support the hypothesis that low-frequency acoustic information presented to the residual-hearing ear facilitates the use of top-down linguistic processing in bimodal hearing. However, bimodal benefit is reduced for temporally interrupted speech as compared with continuous speech, suggesting that listeners' ability to restore missing speech information depends not only on top-down linguistic knowledge but also on the quality of the bottom-up sensory input.
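The two gain measures this abstract contrasts can be written out explicitly. The normalized-gain formula below is the commonly used headroom-relative definition (gain as a fraction of the room left above baseline); the abstract does not spell out its exact formula, so treat this as an assumption:

```python
def percentage_point_gain(vocoder_alone, bimodal):
    """Raw gain in percentage points from adding the low-frequency ear."""
    return bimodal - vocoder_alone

def normalized_gain(vocoder_alone, bimodal):
    """Gain as a percentage of the headroom above baseline, so that
    listeners near ceiling are not penalized for having little room
    to improve (assumed common definition, not quoted from the study)."""
    return 100.0 * (bimodal - vocoder_alone) / (100.0 - vocoder_alone)

# The same 10-point raw gain looks much larger once headroom is considered.
raw = percentage_point_gain(80.0, 90.0)   # 10 points
norm = normalized_gain(80.0, 90.0)        # 50: half the available headroom
```

Comparing both measures, as the study does, guards against ceiling effects when baseline scores differ across sentence materials.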


Subjects
Cochlear Implants, Cues, Deafness/rehabilitation, Speech Perception, Adolescent, Adult, Cochlear Implantation, Computer Simulation, Female, Healthy Volunteers, Humans, Male, Young Adult
8.
J Assoc Res Otolaryngol ; 16(6): 783-96, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26362546

ABSTRACT

This study investigates the effect of spectral degradation on cortical speech encoding in complex auditory scenes. Young normal-hearing listeners were simultaneously presented with two speech streams and were instructed to attend to only one of them. The speech mixtures were subjected to noise-channel vocoding to preserve the temporal envelope and degrade the spectral information of speech. Each subject was tested with five spectral resolution conditions (unprocessed speech, 64-, 32-, 16-, and 8-channel vocoder conditions) and two target-to-masker ratio (TMR) conditions (3 and 0 dB). Ongoing electroencephalographic (EEG) responses and speech comprehension were measured in each spectral and TMR condition for each subject. Neural tracking of each speech stream was characterized by cross-correlating the EEG responses with the envelope of each of the simultaneous speech streams at different time lags. Results showed that spectral degradation and TMR both significantly influenced how top-down attention modulated the EEG responses to the attended and unattended speech. That is, the EEG responses to the attended and unattended speech streams differed more for the higher (unprocessed, 64 ch, and 32 ch) than the lower (16 and 8 ch) spectral resolution conditions, as well as for the higher (3 dB) than the lower TMR (0 dB) condition. The magnitude of differential neural modulation responses to the attended and unattended speech streams significantly correlated with speech comprehension scores. These results suggest that severe spectral degradation and low TMR hinder speech stream segregation, making it difficult to employ top-down attention to differentially process different speech streams.
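The neural-tracking analysis described here — cross-correlating the EEG with each speech envelope at a range of stimulus-to-response lags — can be sketched as below. This is an illustrative single-channel version with synthetic data, not the study's pipeline:

```python
import numpy as np

def lagged_xcorr(eeg, envelope, fs, max_lag_ms=500):
    """Pearson correlation between one EEG channel and a speech envelope
    at stimulus-to-response lags from 0 to max_lag_ms (EEG lags stimulus)."""
    max_lag = int(fs * max_lag_ms / 1000)
    rs = []
    for lag in range(max_lag + 1):
        n = min(len(envelope), len(eeg) - lag)
        rs.append(np.corrcoef(envelope[:n], eeg[lag:lag + n])[0, 1])
    return np.array(rs)

# Demo: a toy "neural" response that is a scaled copy of the envelope,
# delayed by 150 ms, should peak at a 150-ms lag.
fs = 100                                        # 100-Hz envelope/EEG rate
rng = np.random.default_rng(1)
env = rng.standard_normal(10 * fs)              # 10 s of toy envelope
eeg = np.concatenate([np.zeros(15), 0.5 * env]) # delayed, attenuated copy
r = lagged_xcorr(eeg, env, fs)
peak_lag_ms = 1000 * np.argmax(r) / fs
```

The shape of this lag function is what carries the N1- and P2-like components discussed in the related envelope-tracking work below (abstract 12).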


Subjects
Auditory Cortex/physiology, Speech Acoustics, Speech Perception/physiology, Adult, Attention/physiology, Comprehension, Female, Healthy Volunteers, Humans, Male, Young Adult
9.
Speech Commun ; 67: 102-112, 2015 Mar.
Article in English | MEDLINE | ID: mdl-26150679

ABSTRACT

Periodicity is an important property of speech signals. It underlies the signal's fundamental frequency and voice pitch, which are crucial to speech communication. This paper presents a novel framework of periodicity enhancement for noisy speech. The enhancement is applied to the linear prediction residual of speech. The residual signal goes through a constant-pitch time warping process and two sequential lapped-frequency transforms, by which the periodic component is concentrated in certain transform coefficients. By emphasizing the respective transform coefficients, periodicity enhancement of the noisy residual signal is achieved. The enhanced residual signal and estimated linear prediction filter parameters are used to synthesize the output speech. An adaptive algorithm is proposed for adjusting the weights for the periodic and aperiodic components. The effectiveness of the proposed approach is demonstrated via experimental evaluation. It is observed that the harmonic structure of the original speech can be properly restored, improving the perceptual quality of the enhanced speech.
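The linear-prediction residual that this framework operates on is obtained by fitting an all-pole model to a speech frame and inverse-filtering. A minimal sketch of that first step, using the textbook autocorrelation method on a synthetic frame (the paper's warping and lapped transforms are not reproduced here):

```python
import numpy as np

def lpc_residual(frame, order=10):
    """Linear-prediction residual of a speech frame: fit an all-pole model
    by the autocorrelation method, then inverse-filter the frame with it."""
    # Autocorrelation at lags 0..order.
    r = np.correlate(frame, frame, "full")[len(frame) - 1:][:order + 1]
    # Toeplitz normal equations R a = r[1:]; a are the predictor coefficients.
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    inverse_filter = np.concatenate(([1.0], -a))  # A(z) = 1 - sum a_k z^-k
    return np.convolve(frame, inverse_filter)[:len(frame)]

# Demo: an AR(1) "voiced-like" frame is highly predictable, so the residual
# carries far less energy than the frame itself.
rng = np.random.default_rng(0)
e = rng.standard_normal(400)
frame = np.empty(400)
frame[0] = e[0]
for n in range(1, 400):
    frame[n] = 0.9 * frame[n - 1] + e[n]
residual = lpc_residual(frame)
```

The residual approximates the excitation signal, which is why periodicity processing is applied there rather than to the spectrally shaped waveform.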

10.
J Acoust Soc Am ; 137(5): 2846-57, 2015 May.
Article in English | MEDLINE | ID: mdl-25994712

ABSTRACT

Low-frequency acoustic cues have been shown to improve speech perception in cochlear-implant listeners. However, the mechanisms underlying this benefit are still not well understood. This study investigated the extent to which low-frequency cues can facilitate listeners' use of linguistic knowledge in simulated electric-acoustic stimulation (EAS). Experiment 1 examined differences in the magnitude of EAS benefit at the phoneme, word, and sentence levels. Speech materials were processed via noise-channel vocoding and lowpass (LP) filtering. The amount of spectral degradation in the vocoder speech was varied by applying different numbers of vocoder channels. Normal-hearing listeners were tested on vocoder-alone, LP-alone, and vocoder + LP conditions. Experiment 2 further examined factors that underlie the context effect on EAS benefit at the sentence level by limiting the low-frequency cues to temporal envelope and periodicity (AM + FM). Results showed that EAS benefit was greater for higher-context than for lower-context speech materials even when the LP ear received only low-frequency AM + FM cues. Possible explanations for the greater EAS benefit observed with higher-context materials may lie in the interplay between perceptual and expectation-driven processes for EAS speech recognition, and/or the band-importance functions for different types of speech materials.


Subjects
Acoustic Stimulation/methods, Acoustics, Cues, Recognition (Psychology), Speech Perception, Adolescent, Adult, Speech Audiometry, Computer Simulation, Humans, Periodicity, Phonetics, Sound Spectrography, Speech Acoustics, Time Factors, Voice Quality, Young Adult
11.
Augment Altern Commun ; 30(4): 298-313, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25384797

ABSTRACT

Graphic symbols are a necessity for pre-literate children who use aided augmentative and alternative communication (AAC) systems (including non-electronic communication boards and speech generating devices), as well as for mobile technologies using AAC applications. Recently, developers of the Autism Language Program (ALP) Animated Graphics Set have added environmental sounds to animated symbols representing verbs in an attempt to enhance their iconicity. The purpose of this study was to examine the effects of environmental sounds (added to animated graphic symbols representing verbs) in terms of naming. Participants included 46 children with typical development between the ages of 3;0 to 3;11 (years;months). The participants were randomly allocated to a condition of symbols with environmental sounds or a condition without environmental sounds. Results indicated that environmental sounds significantly enhanced the naming accuracy of animated symbols for verbs. Implications in terms of symbol selection, symbol refinement, and future symbol development will be discussed.


Subjects
Acoustic Stimulation/methods, Visual Pattern Recognition, Photic Stimulation/methods, Vocabulary, Preschool Child, Communication Aids for Disabled, Female, Humans, Male, Physiological Pattern Recognition
12.
Hear Res ; 316: 73-81, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25124153

ABSTRACT

This study investigates how top-down attention modulates neural tracking of the speech envelope in different listening conditions. In the quiet conditions, a single speech stream was presented and the subjects paid attention to the speech stream (active listening) or watched a silent movie instead (passive listening). In the competing speaker (CS) conditions, two speakers of opposite genders were presented diotically. Ongoing electroencephalographic (EEG) responses were measured in each condition and cross-correlated with the speech envelope of each speaker at different time lags. In quiet, active and passive listening resulted in similar neural responses to the speech envelope. In the CS conditions, however, the shape of the cross-correlation function was remarkably different between the attended and unattended speech. The cross-correlation with the attended speech showed stronger N1 and P2 responses but a weaker P1 response compared to the cross-correlation with the unattended speech. Furthermore, the N1 response to the attended speech in the CS condition was enhanced and delayed compared with the active listening condition in quiet, while the P2 response to the unattended speaker in the CS condition was attenuated compared with the passive listening in quiet. Taken together, these results demonstrate that top-down attention differentially modulates envelope-tracking neural activity at different time lags and suggest that top-down attention can both enhance the neural responses to the attended sound stream and suppress the responses to the unattended sound stream.


Subjects
Attention, Hearing/physiology, Speech/physiology, Adult, Audiometry, Auditory Cortex/physiology, Auditory Perception, Electroencephalography, Female, Hearing Tests, Humans, Male, Neurons/metabolism, Sound, Speech Perception/physiology, Young Adult
13.
Int J Audiol ; 53(8): 546-57, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24694089

ABSTRACT

OBJECTIVES: This study investigates the efficacy of a cochlear implant (CI) processing method that enhances temporal periodicity cues of speech. DESIGN: Subjects participated in word and tone identification tasks. Two processing conditions - the conventional advanced combination encoder (ACE) and tone-enhanced ACE were tested. Test materials were Cantonese disyllabic words recorded from one male and one female speaker. Speech-shaped noise was added to clean speech. The fundamental frequency information for periodicity enhancement was extracted from the clean speech. Electrical stimuli generated from the noisy speech with and without periodicity enhancement were presented via direct stimulation using a Laura 34 research processor. Subjects were asked to identify the presented word. STUDY SAMPLE: Seven post-lingually deafened native Cantonese-speaking CI users. RESULTS: Percent correct word, segmental structure, and tone identification scores were calculated. While word and segmental structure identification accuracy remained similar between the two processing conditions, tone identification in noise was better with tone-enhanced ACE than with conventional ACE. Significant improvement on tone perception was found only for the female voice. CONCLUSIONS: Temporal periodicity cues are important to tone perception in noise. Pitch and tone perception by CI users could be improved when listeners received enhanced temporal periodicity cues.


Subjects
Cochlear Implants, Speech Perception, Adult, Aged, China, Female, Humans, Language, Male, Middle Aged
14.
PLoS One ; 9(4): e95001, 2014.
Article in English | MEDLINE | ID: mdl-24747721

ABSTRACT

OBJECTIVE: To investigate a set of acoustic features and classification methods for the classification of three groups of fricative consonants differing in place of articulation. METHOD: A support vector machine (SVM) algorithm was used to classify the fricatives extracted from the TIMIT database in quiet and also in speech babble noise at various signal-to-noise ratios (SNRs). Spectral features including four spectral moments, peak, slope, Mel-frequency cepstral coefficients (MFCC), Gammatone filters outputs, and magnitudes of fast Fourier Transform (FFT) spectrum were used for the classification. The analysis frame was restricted to only 8 msec. In addition, commonly-used linear and nonlinear principal component analysis dimensionality reduction techniques that project a high-dimensional feature vector onto a lower dimensional space were examined. RESULTS: With 13 MFCC coefficients, 14 or 24 Gammatone filter outputs, classification performance was greater than or equal to 85% in quiet and at +10 dB SNR. Using 14 Gammatone filter outputs above 1 kHz, classification accuracy remained high (greater than 80%) for a wide range of SNRs from +20 to +5 dB SNR. CONCLUSIONS: High levels of classification accuracy for fricative consonants in quiet and in noise could be achieved using only spectral features extracted from a short time window. Results of this work have a direct impact on the development of speech enhancement algorithms for hearing devices.
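The classification setup — an SVM over short-window spectral features for three places of articulation — can be sketched with scikit-learn. The features below are synthetic Gaussian stand-ins for the study's spectral measures (moments, MFCCs, Gammatone outputs); the real work used frames extracted from TIMIT:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical 14-dimensional spectral feature vectors (e.g., Gammatone
# filter outputs) for fricatives from three places of articulation.
n_per_class, n_features = 100, 14
means = [np.full(n_features, m) for m in (0.0, 1.5, 3.0)]
X = np.vstack([rng.normal(m, 1.0, (n_per_class, n_features)) for m in means])
y = np.repeat([0, 1, 2], n_per_class)  # three place-of-articulation classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)   # RBF-kernel support vector machine
accuracy = clf.score(X_te, y_te)
```

With real data, noise robustness would be assessed by retraining or retesting at each SNR, as the study does across +20 to +5 dB.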


Subjects
Hearing Aids, Speech, Humans, Support Vector Machine
15.
Ear Hear ; 34(3): 300-12, 2013.
Article in English | MEDLINE | ID: mdl-23165224

ABSTRACT

OBJECTIVES: This study describes a vocoder-based frequency-lowering system that enhances spectral cues for nonsonorant consonants differing in place of articulation. The goal of this study was to evaluate the efficacy of this system for speech recognition by hearing-impaired listeners. DESIGN: Experiment 1 evaluated fricative consonant recognition in quiet. Eight fricatives in /VCV/ context were used. Experiment 2 evaluated consonant recognition in quiet with 22 consonants. Six listeners with steeply sloping high-frequency sensorineural hearing loss participated in experiment 1. The same six listeners and three additional listeners with flat/mid-frequency sensorineural hearing loss participated in experiment 2. Two processing conditions-frequency lowering and conventional amplification-were tested in each experiment. Insertion gains based on the NAL-RP formula were provided up to 8000 Hz for each processing condition. In addition, speech stimuli were low-pass (LP) filtered at 1000, 1500, and 2000 Hz to evaluate the effect of lack of high-frequency speech information on consonant perception with and without frequency lowering. For these LP speech conditions, amplification was provided up to the cutoff frequencies. Overall percent correct and percent information transmission were calculated for each processing and speech condition. RESULTS: The frequency-lowering system provided significant benefit for the perception of fricative consonants and perception of the place-of-articulation feature for hearing-impaired listeners without affecting their perception of sonorant consonants and other consonant features (i.e., voicing and nasality). The improvement of fricative consonant perception was observed for both wideband and LP speech conditions for the steeply sloping hearing-loss listeners. 
CONCLUSIONS: The results indicate that individuals with unaidable hearing loss above 1000 to 2000 Hz would receive significant benefit with the system compared with conventional amplification for the perception of fricative consonants, and more importantly, significant benefit for the perception of place of articulation.


Subjects
Hearing Aids, Sensorineural Hearing Loss/therapy, Phonetics, Speech Acoustics, Speech Perception/physiology, Adolescent, Adult, Speech Audiometry/methods, Auditory Threshold/physiology, Equipment Design, Female, Sensorineural Hearing Loss/physiopathology, Humans, Male, Middle Aged
16.
Ear Hear ; 33(5): 645-59, 2012.
Article in English | MEDLINE | ID: mdl-22677814

ABSTRACT

OBJECTIVES: This study was designed to evaluate the contribution of temporal and spectral cues for timbre perception in listeners with a cochlear implant (CI) in one ear and low-frequency residual hearing in the contralateral ear (bimodal hearing), and listeners with two CIs. Specifically, it examined the relationship between timbre and speech perception in these two groups of listeners. It was hypothesized that, similar to speech recognition, temporal-envelope cues are dominant cues for timbre perception, and the reliance on spectral cues was reduced in both bimodal and bilateral CI users compared with that in normal-hearing listeners. It was further hypothesized that the patterns of results with regard to combined benefit would be similar between timbre and speech perception. DESIGN: Seven bimodal and five bilateral CI users participated. Sixteen stimuli that synthesized western musical instruments were used for the timbre-perception task. Sixteen consonants in the /aCa/ context and nine monophthongs in the /hVd/ context were used for the phoneme-recognition task. Each subject was tested on three listening conditions: individual device alone (single CI, or hearing aid [HA] alone) and combined use of devices (CI + HA, or 2CIs). For the timbre-perception task, each listener made judgments of dissimilarity between stimulus pairs. Multidimensional scaling analysis was performed to derive the coordinates of the dimensions that best fit the data. Correlational analyses were performed to relate the coordinates of each dimension to the temporal-envelope (impulsiveness) and spectral-envelope (spectral-centroid) features of the stimuli. For the phoneme-recognition task, each listener identified the phoneme he or she heard by choosing an answer displayed on the computer screen. Overall percent correct phoneme-identification scores and percent information transmission for consonant and vowel features were calculated.
RESULTS: There were strong correlations between impulsiveness and the first dimension (Dim 1) of the timbre space, but correlations between spectral centroid and the second dimension (Dim 2) were weak in all listening conditions for both groups of listeners. As a group, the combined use of devices did not significantly improve listeners' ability to perceive differences in musical timbre compared with the better single-device condition. Some bimodal and bilateral CI users showed a considerably stronger correlation between spectral centroid and Dim 2 in the combined condition than with a single CI or an HA. There was no relationship between percent-correct phoneme recognition and timbre perception in any listening condition. However, there was a consistent pattern of combined benefit across timbre perception and vowel recognition. In general, listeners who demonstrated combined benefit for vowel recognition also showed a considerable increase in the correlation between spectral centroid and Dim 2 with the combined use of devices compared with the single-device conditions. This improvement was not evident in those who did not demonstrate combined benefit for vowel recognition. CONCLUSIONS: As in speech recognition, the temporal envelope was a dominant cue for timbre perception in bimodal and bilateral CI users. In addition, there was a close relationship between timbre perception and vowel recognition with regard to combined benefit. The present findings suggest that speech recognition and timbre perception could both be enhanced when listeners receive different spectral cues from their individual devices.
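The two stimulus features used in the correlational analyses can be computed directly from a waveform. A minimal sketch in Python (the function names and the simple envelope-based attack measure are illustrative, not the authors' exact analysis):

```python
import numpy as np

def spectral_centroid(x, fs):
    """Amplitude-weighted mean frequency of the magnitude spectrum (Hz)."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.sum(freqs * mag) / np.sum(mag)

def log_attack_time(x, fs, lo=0.1, hi=0.9):
    """Log10 of the time (s) the amplitude envelope takes to rise from
    10% to 90% of its peak; more negative = more impulsive onset."""
    env = np.abs(x)
    win = max(1, int(0.005 * fs))                 # 5 ms moving-average smoothing
    env = np.convolve(env, np.ones(win) / win, mode="same")
    peak = env.max()
    t_lo = np.argmax(env >= lo * peak)
    t_hi = np.argmax(env >= hi * peak)
    return np.log10(max(t_hi - t_lo, 1) / fs)

fs = 16000
t = np.arange(int(0.5 * fs)) / fs
# a 440 Hz tone with a slow 100 ms attack vs. a sharply decaying "pluck"
slow = np.sin(2 * np.pi * 440 * t) * np.minimum(t / 0.1, 1.0)
pluck = np.sin(2 * np.pi * 440 * t) * np.exp(-t / 0.02)
print(log_attack_time(pluck, fs) < log_attack_time(slow, fs))  # True: pluck is more impulsive
```

A study-style analysis would correlate these per-stimulus values with the MDS coordinates of each perceptual dimension.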


Subjects
Auditory Perception , Cochlear Implantation/methods , Speech Perception , Acoustic Stimulation , Adolescent , Adult , Aged , Cochlear Implants , Cues , Female , Hearing Tests , Humans , Male , Middle Aged , Music , Time Factors
17.
Speech Commun ; 54(1): 147-160, 2012 Jan 01.
Article in English | MEDLINE | ID: mdl-21927522

ABSTRACT

Frequency lowering is a form of signal processing designed to deliver high-frequency speech cues to the residual hearing region of a listener with a high-frequency hearing loss. While this technique has been shown to improve the intelligibility of fricative and affricate consonants, perception of place of articulation remains a challenge for hearing-impaired listeners, especially when the bandwidth of the speech signal is reduced during frequency lowering. This paper describes a modified vocoder-based frequency-lowering system, similar to the one reported by Posen, Reed, and Braida (1993), designed to improve place-of-articulation perception by enhancing the spectral differences among fricative consonants. In this system, frequency lowering is conditional: processing is suppressed whenever the high-frequency portion (>400 Hz) of the speech signal is periodic. In addition, the system separates non-sonorant consonants into three classes based on spectral information (slope and peak location) of the fricative consonants. Results from a group of normal-hearing listeners show that, compared with low-pass filtering and Posen et al.'s system, the modified system improves perception of the frication and affrication features, as well as the place-of-articulation distinction, without degrading perception of nasals and semivowels.
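The conditional gate described above needs a frame-by-frame periodicity decision on the high-frequency portion of the signal. A minimal sketch, assuming an autocorrelation-based voicing detector and a placeholder lowering step (frame size, pitch-lag range, and threshold are illustrative choices, not the paper's parameters):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def is_periodic(frame, fs, threshold=0.5):
    """Crude voicing detector: peak of the normalized autocorrelation
    within a 60-300 Hz candidate pitch-lag range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return False
    ac = ac / ac[0]
    lo, hi = int(fs / 300), int(fs / 60)
    return ac[lo:hi].max() > threshold

def conditional_lowering(x, fs, frame_ms=20):
    """Gate a (placeholder) frequency-lowering step frame by frame:
    periodic (voiced) frames pass through unmodified; aperiodic frames
    would be lowered. Returns the per-frame lowering decisions."""
    sos = butter(4, 400, btype="highpass", fs=fs, output="sos")
    hp = sosfilt(sos, x)                      # examine only the >400 Hz portion
    n = int(frame_ms / 1000 * fs)
    decisions = []
    for i in range(0, len(x) - n, n):
        decisions.append(not is_periodic(hp[i:i + n], fs))
    return decisions                          # True = apply lowering for that frame

fs = 16000
t = np.arange(fs) / fs
voiced = np.sign(np.sin(2 * np.pi * 160 * t))          # harmonic-rich, vowel-like
noise = np.random.default_rng(0).standard_normal(fs)   # fricative-like
# noise frames should trigger lowering far more often than voiced frames
print(sum(conditional_lowering(noise, fs)), sum(conditional_lowering(voiced, fs)))
```

In the full system the "apply lowering" branch would transpose the high-frequency spectrum into the residual-hearing region; here only the gating logic is sketched.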

18.
J Speech Lang Hear Res ; 54(3): 959-80, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21060139

ABSTRACT

PURPOSE: Improved speech recognition in binaurally combined acoustic-electric stimulation (otherwise known as bimodal hearing) could arise when listeners integrate speech cues across acoustic and electric hearing. The aims of this study were (a) to identify speech cues extracted in electric hearing and in residual acoustic hearing in the low-frequency region and (b) to investigate cochlear implant (CI) users' ability to integrate speech cues across frequencies. METHOD: Normal-hearing (NH) and CI subjects participated in consonant and vowel identification tasks. Each subject was tested in three listening conditions: CI alone (vocoder speech for NH), hearing aid (HA) alone (low-pass filtered speech for NH), and both. Integration ability for each subject was evaluated using a model of optimal integration, the PreLabeling integration model (Braida, 1991). RESULTS: Only a few CI listeners demonstrated bimodal benefit for phoneme identification in quiet. Speech cues extracted from the CI and the HA were highly redundant for consonants but were complementary for vowels. CI listeners also exhibited reduced integration ability for both consonant and vowel identification compared with their NH counterparts. CONCLUSION: These findings suggest that reduced bimodal benefits in CI listeners are due to insufficient complementary speech cues across ears, a decrease in integration ability, or both.
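The two NH control conditions named above (vocoder speech for CI-alone, low-pass filtered speech for HA-alone) are standard simulations that can be sketched briefly. A minimal version, assuming a noise vocoder with log-spaced bands and Hilbert-envelope extraction (band counts, cutoffs, and filter orders are illustrative, not the study's settings):

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def lowpass_speech(x, fs, cutoff=500.0):
    """HA-alone simulation: keep only the low-frequency residual-hearing region."""
    sos = butter(6, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfilt(sos, x)

def noise_vocoder(x, fs, n_bands=8, lo=100.0, hi=7000.0):
    """CI-alone simulation: split the spectrum into log-spaced bands, extract
    each band's temporal envelope, and use it to modulate band-limited noise."""
    edges = np.geomspace(lo, hi, n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfilt(sos, x)))                  # band envelope
        carrier = sosfilt(sos, rng.standard_normal(len(x)))     # band-limited noise
        out += env * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 3000 * t)  # stand-in for speech
lp = lowpass_speech(x, fs)    # only the 300 Hz component survives
voc = noise_vocoder(x, fs)    # envelopes preserved, fine structure replaced by noise
```

Presenting `lp` to one ear and `voc` to the other approximates the bimodal condition for NH listeners.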


Subjects
Cochlear Implants , Deafness/physiopathology , Deafness/therapy , Hearing Aids , Phonetics , Speech Perception/physiology , Acoustic Stimulation , Adult , Combined Modality Therapy , Cues , Electric Stimulation , Female , Hearing/physiology , Humans , Male , Speech Discrimination Tests , Young Adult
19.
J Speech Lang Hear Res ; 54(3): 981-94, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21060140

ABSTRACT

PURPOSE: The purpose of this study was to investigate musical timbre perception in cochlear-implant (CI) listeners using a multidimensional scaling technique to derive a timbre space. METHOD: Sixteen stimuli synthesized to emulate Western musical instruments were used (McAdams, Winsberg, Donnadieu, De Soete, & Krimphoff, 1995). Eight CI listeners and 15 normal-hearing (NH) listeners participated. Each listener judged the dissimilarity between stimulus pairs. Acoustic analyses characterizing the temporal and spectral properties of each stimulus were performed to examine the psychophysical nature of each perceptual dimension. RESULTS: For NH listeners, the timbre space was best represented in three dimensions: one correlated with the temporal envelope (log-attack time), one with the spectral envelope (spectral centroid), and one with the spectral fine structure (spectral irregularity) of the stimuli. The timbre space for CI listeners, however, was best represented by two dimensions, one correlated with temporal-envelope features and the other weakly correlated with spectral-envelope features. CONCLUSIONS: The temporal envelope was a dominant cue for timbre perception in CI listeners. Compared with NH listeners, CI listeners showed reduced reliance on both spectral-envelope and spectral fine-structure cues for timbre perception.
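The core step of deriving a timbre space from pairwise dissimilarity judgments can be illustrated with classical (Torgerson) MDS. A minimal sketch with a toy 4-stimulus dissimilarity matrix (the values are invented for illustration, not the study's data):

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: double-center the squared dissimilarities,
    then embed the points with the top-k eigenvectors of the Gram matrix."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centered Gram matrix
    w, v = np.linalg.eigh(b)                   # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]              # keep the k largest
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy judgments for 4 "instruments": items 0 and 1 sound alike, 2 and 3
# sound alike, and the two pairs are perceived as very different.
d = np.array([
    [0.0, 1.0, 8.0, 8.0],
    [1.0, 0.0, 8.0, 8.0],
    [8.0, 8.0, 0.0, 1.0],
    [8.0, 8.0, 1.0, 0.0],
])
coords = classical_mds(d)                      # one 2-D point per stimulus
within = np.linalg.norm(coords[0] - coords[1])
between = np.linalg.norm(coords[0] - coords[2])
# the recovered space preserves the pair structure: within << between
```

In the study, each recovered dimension would then be correlated with an acoustic feature (log-attack time, spectral centroid, spectral irregularity) to interpret it.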


Subjects
Auditory Perception/physiology , Cochlear Implants , Deafness/physiopathology , Deafness/therapy , Music , Time Perception/physiology , Acoustic Stimulation , Acoustics , Adolescent , Adult , Cues , Electric Stimulation , Female , Humans , Male , Middle Aged , Models, Biological , Young Adult
20.
J Acoust Soc Am ; 127(5): 3114-23, 2010 May.
Article in English | MEDLINE | ID: mdl-21117760

ABSTRACT

A recent study reported that a group of Med-El COMBI 40+ cochlear implant (CI) users could, in a forced-choice task, detect changes in the rate of a pulse train at rates higher than the 300 pps "upper limit" commonly reported in the literature [Kong, Y.-Y., et al. (2009). J. Acoust. Soc. Am. 125, 1649-1657]. The present study further investigated the upper limit of temporal pitch in the same group of CI users with three tasks: pitch ranking, rate discrimination, and multidimensional scaling (MDS). The patterns of results were consistent across the three tasks, and all subjects could follow rate changes above 300 pps. Two subjects showed an exceptional ability to follow temporal pitch changes up to about 900 pps. Results from the MDS task indicated that, for the two listeners tested, changes in pulse rate over the range 500-840 pps were perceived along a perceptual dimension orthogonal to place of excitation. Some subjects showed a temporal pitch reversal at rates beyond their upper limit of pitch, and some showed a reversal within a small range of rates below that limit. These results are discussed in relation to the possible neural bases of temporal pitch processing at high rates.
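The stimulus class involved here, a charge-balanced biphasic pulse train at a given rate, is simple to parameterize. A minimal sketch (pulse rate, phase duration, and sampling grid are illustrative values, not the study's stimuli):

```python
import numpy as np

def pulse_train(rate_pps, dur_s, fs=100_000, phase_us=50):
    """Charge-balanced biphasic pulse train sampled at fs:
    each pulse is a cathodic then anodic phase of phase_us microseconds."""
    n = int(dur_s * fs)
    x = np.zeros(n)
    phase = max(1, int(phase_us * 1e-6 * fs))
    onsets = (np.arange(int(round(dur_s * rate_pps))) / rate_pps * fs).astype(int)
    for i in onsets:
        x[i:i + phase] = -1.0               # cathodic phase
        x[i + phase:i + 2 * phase] = 1.0    # anodic phase
    return x

# An equal *relative* rate change corresponds to a shrinking inter-pulse-interval
# difference at higher rates, one reason discrimination may degrade near the limit:
for base in (100, 300, 600):
    delta_ipi_us = (1 / base - 1 / (base * 1.06)) * 1e6
    print(f"{base} pps -> {base * 1.06:.0f} pps: IPI shrinks by {delta_ipi_us:.0f} us")
```

A rate-discrimination trial would then present two such trains, e.g. `pulse_train(300, 0.5)` vs. `pulse_train(318, 0.5)`, in a forced-choice comparison.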


Subjects
Cochlear Implantation/instrumentation , Cochlear Implants , Correction of Hearing Impairment/psychology , Cues , Persons With Hearing Impairments/rehabilitation , Pitch Perception , Time Perception , Acoustic Stimulation , Adult , Aged , Audiometry , Choice Behavior , Female , Humans , Male , Middle Aged , Persons With Hearing Impairments/psychology , Psychoacoustics , Signal Detection, Psychological , Time Factors